Executive Summary
On December 8, the Trump administration announced plans to loosen U.S. export controls on artificial intelligence (AI) chips to China by approving the sale of Nvidia H200 chips—the most powerful AI chip ever approved for export to China. That decision was driven in part by concerns that Huawei is becoming a viable competitor to Nvidia in AI chips, making U.S. export controls less effective. However, a comparison of publicly available data on AI chip performance from both companies, coupled with estimates of AI chip production capacity, reveals something different: Huawei is not a rising competitor. Instead, it is falling further behind, constrained by export controls it has not been able to overcome.
Nvidia and Huawei’s AI chip roadmaps from this year show that the performance gap between U.S. and Chinese AI chips is large and growing. The best U.S. AI chips are currently about five times more powerful than Huawei’s best offerings. By 2027, that gap will widen to seventeen times. Perhaps most striking: according to Huawei’s own public roadmap, the company’s next-generation chip in 2026 will actually be less powerful than its best chip today. This apparent regression could indicate that SMIC and other Chinese fabs are struggling to produce high-performing AI chips for Huawei at scale. With SMIC stuck at 7nm process technology due to U.S. and allied equipment export controls, Huawei has hit a ceiling it is struggling to break through.
Huawei’s strategy of compensating for inferior quality with higher quantity is also failing. Even under very aggressive assumptions about Huawei’s AI chip production capacity—that it will produce 800,000 AI chips in 2025 (double the highest public estimates), two million AI chips in 2026, and four million in 2027—it will not be enough. Huawei would still produce only about 5 percent of Nvidia’s aggregate AI computing power in 2025, falling to 4 percent in 2026 and 2 percent in 2027. It is virtually impossible for Huawei to close this gap: even a hundredfold increase in AI chip production by 2027 would not bring Huawei to half of Nvidia’s output. Meanwhile, China's demand for AI compute is growing exponentially as models become more advanced, meaning the country’s AI chip shortage will become more acute over time, not less.
Loosening export controls on the H200 chip could significantly erode U.S. advantages in AI. If the United States exports three million H200 chips to China in 2026, it would give China more AI computing power than it could produce domestically until 2028 or 2029 at the earliest. This could enable China to build some of the largest AI data centers in the world, help Chinese AI labs close the gap with leading U.S. models, and allow China to build AI data centers globally that compete with U.S. AI infrastructure for the first time.
Huawei is not a threat that justifies loosening controls; it is evidence that the controls are working.
Background on Why the United States Controls Its AI Chips
In October 2022, the United States banned the export to China of any AI chips equal to or more capable than the Nvidia A100 chip, which was released in 2020. The goal of those restrictions is to maximize the United States’ AI advantage over China by hindering China’s ability to develop or run AI models at scale, which requires aggregating enormous numbers of AI chips to create extremely large amounts of computing power.
The United States’ current edge in AI rests on one foundation: access to advanced computing power. In every other element of the AI stack—data, research talent, algorithmic innovation, applications, and electricity generation—China either equals or surpasses the United States. But the United States has a significant advantage over China in its ability to produce AI hardware, an advantage that is rapidly expanding. While U.S. and allied firms are leveraging their access to the most advanced chipmaking technology to produce large numbers of increasingly capable AI chips using the world’s most sophisticated production processes, China’s advanced chipmaking is constrained in quality and quantity due to U.S. and allied export controls on advanced chipmaking equipment (which were initiated under the first Trump administration and expanded under the Biden administration).
Furthermore, the amount of computing power required to develop and run the most advanced AI models continues to increase at an exponential rate. Since 2018, the amount of computing power needed for a frontier AI training run has increased by an average of 4.2x/year, doubling about every six months. As leading AI models require even larger amounts of computing power, the ability to produce both high-quality AI chips and very large volumes of those chips is extremely important. Doing just one or the other will be insufficient.
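The compounding arithmetic above can be sanity-checked in a few lines. The 4.2x-per-year growth rate is the figure cited in this section; everything else follows from it:

```python
import math

# Frontier AI training compute grows ~4.2x per year (figure cited above).
growth_per_year = 4.2

# Implied doubling time in months: solve 2 = growth_per_year ** (t years).
doubling_months = 12 * math.log(2) / math.log(growth_per_year)
print(f"doubling time: {doubling_months:.1f} months")  # ~5.8 months

# Cumulative growth over five years at this rate.
print(f"5-year growth: {growth_per_year ** 5:,.0f}x")  # ~1,307x
```

A doubling time of roughly six months, sustained for five years, compounds to more than a thousandfold increase in required compute, which is why both chip quality and chip volume matter.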
Evaluating the Capabilities of the Nvidia H200
The Nvidia H200 is far more powerful than any other AI chip that U.S. firms sell to China. A comparison of the Nvidia H20 (a degraded H200 and the best chip allowed to be sold to China before President Donald Trump approved H200 sales), the B30A (the China-specific version of Nvidia’s Blackwell chip, which Trump declined to sell to China), and the H200 is included in Figure 1. (See Appendix A for details explaining how total processing performance and memory bandwidth are relevant to AI training and inference.)
Figure 1: U.S. AI Chip Performance Comparison[1]
Analysis
- The H200 is substantially more powerful than any AI chip previously available for export to China. It has over six times more processing power than the approved H20 chip, and nine times more processing power than the maximum levels permitted under U.S. export control thresholds.
- By one metric, the H200 is more capable than the proposed B30A Blackwell chip. The H200 has 20 percent more memory bandwidth than the B30A, and uses HBM3e memory, which is separately banned for export to China if exported as a stand-alone good.
- The H200 is far better at AI training than any other chip currently available in China. DeepSeek researchers concluded that the best Chinese AI chip, Huawei’s Ascend 910C, performs 60 percent as well as the Nvidia H100, a chip that is similar to the H200 but has lower memory bandwidth.
Forecasting the Quality of U.S. and Chinese AI Chips
A critical factor in the debate over exporting AI chips is China’s own AI chip production capabilities—both in terms of the quality and quantity of the chips it can produce. U.S. export controls prohibit Chinese firms from using overseas fabrication facilities (known as fabs), such as those operated by TSMC, to make advanced AI chips. This forces all Chinese AI chip production to occur at fabs within China. However, China’s most advanced fabs lag far behind the TSMC fabs that make Nvidia’s chips, and their production capacity is very limited relative to TSMC’s. This is largely due to U.S. and allied restrictions on the export of advanced semiconductor manufacturing equipment to China, which have limited China’s most advanced chip production capabilities. As a result, China’s ability to make high-performance AI chips lags far behind leading U.S. firms.
The current and future quality gap between U.S. and Chinese AI chips can be evaluated by comparing the public roadmaps of Nvidia and Huawei (China’s leading AI chip designer), which both companies published this year. On March 18, Nvidia CEO Jensen Huang outlined the company’s production roadmap through 2028, which included projected specifications for the chips that Nvidia plans to produce in 2026 and 2027.
On September 18, Huawei published its own three-year roadmap, with similarly detailed data for all the chips it plans to release through 2028.
The graphs in Figure 2 and Figure 3 compare those two roadmaps.[2] They list the capability of the best AI chip that Nvidia and Huawei each sold, or publicly anticipate selling, in the first half of each year from 2023 to 2029. Figure 2 compares the total processing performance (TPP) of each chip, and Figure 3 compares their memory bandwidth. (See Appendix B for a table that lists the underlying performance specifications of each Nvidia and Huawei chip.)
Figure 2: U.S. vs. China—Best AI Chip Performance Comparison (Total Processing Performance)
Figure 3: U.S. vs. China—Best AI Chip Performance Comparison (Memory Bandwidth)
Analysis
- The performance gap between the best U.S. and Chinese AI chips is significant and will expand substantially in the next two years. The best U.S. AI chips are currently about five times more powerful than the best Chinese AI chips, measured by TPP. By the second half of 2027, Nvidia’s best AI chips will be seventeen times more powerful than Huawei’s best AI chips.
- The capabilities of Huawei’s best AI chips are declining. According to Huawei’s own public roadmap, the Ascend 950PR and Ascend 950DT, both of which Huawei plans to release in 2026, have a lower TPP than Huawei’s best AI chip today, the Ascend 910C. This could indicate that Huawei is struggling to produce advanced AI chips domestically, and potentially that the vast majority, or even all, of the Ascend 910B and 910C chips were illicitly made at TSMC in Taiwan, with few or none made at SMIC, China’s leading chip manufacturer.
- Huawei will not be able to make a chip more powerful than the H200 for two years. Huawei does not plan to produce a chip that has greater performance or memory bandwidth than the H200 until the Ascend 960 in Q4 2027 (which will likely be widely available in 2028). Although the Ascend 910C approaches the specifications of the H100 on paper, as noted earlier, it delivers only 60 percent of the H100’s real-world performance, and China cannot make enough 910Cs to meet domestic demand.
- Huawei’s AI chips will not improve markedly in the coming years. This is likely because SMIC cannot currently produce chips at nodes more advanced than 7nm, given U.S. and allied export controls on production tools critical for more advanced processes. The Ascend 910B and C are already among the most powerful chips ever made at the 7nm node, and further performance gains will be increasingly difficult to obtain. The last Nvidia chip made at the 7nm node was the A100, which was released in 2020.
Forecasting U.S. and China Total AI Computing Power Production
The aggregate amount of AI computing power produced is the best single metric for evaluating any country’s overall AI hardware production capabilities. Aggregate AI computing power is determined by multiplying the performance of each chip produced (noted in the prior section) by the quantity of each chip produced.
Huawei acknowledges that its chips trail Nvidia’s in quality: its stated strategy is to compensate by producing inferior chips in large enough numbers to compete with Nvidia’s offerings in aggregate. Doing so would require it to make extremely large numbers of chips to close the quality gap.
There is some uncertainty regarding the number of AI chips China and Huawei currently make. In June, the U.S. Department of Commerce asserted that China would make two hundred thousand AI chips in 2025, while sources familiar with Huawei’s plans indicated that it would produce between three hundred thousand and four hundred thousand. Research firm SemiAnalysis produced another estimate, assessing that Huawei could produce as many as 1.5 million AI chip dies in 2025, but that it would only produce 200,000–300,000 completed AI chips due to a shortage of high-bandwidth memory, which the United States placed under export controls in December 2024. Additionally, TSMC illicitly made nearly three million Ascend AI chip dies for Huawei in 2023 and 2024, which creates further uncertainty regarding how many of the dies used in Ascend AI chips produced this year were fabricated at SMIC in China, versus by TSMC in Taiwan.
Meanwhile, Nvidia’s Huang stated that Nvidia would make four-to-five million AI chips in 2025, which he described as double its 2024 production. Separate reporting indicates that 80 percent of those chips would be current-generation Blackwell chips, and 20 percent would be the previous-generation Hopper chips.
This analysis includes two estimates regarding China’s future AI computing power production capabilities to account for uncertainty regarding its production quantity. First, it includes a median-case assessment regarding the number of chips Huawei can make, which aligns with the high-end of publicly available projections regarding Huawei’s AI chipmaking capacity. Second, it includes an aggressive assessment that assumes China’s AI chip production capacity is far greater than what has been publicly reported, and will reach much higher levels in the coming years. The analysis also includes relatively conservative assumptions for Nvidia’s production, assuming Nvidia will experience slower growth in chip production each of the next two years than it did in 2025.
Figure 4 forecasts the overall AI compute production capabilities of Nvidia and Huawei, including both median and aggressive scenarios for Huawei, pursuant to the following assumptions:
- Huawei (median-case assumptions): Aligns with the high-end of publicly available estimates for Huawei's 2025 production, and assumes continued growth. Specifically, assumes that China will produce: (1) four hundred thousand Ascend 910Cs in 2025; (2) one million AI chips in 2026, split equally between Ascend 910Cs and next-generation Ascend 950s; and (3) two million Ascend 950s in 2027.
- Huawei (aggressive assumptions): Assumes that China can produce more AI chips this year than most analysts estimate, and that this production will scale rapidly over the next two years. Specifically, the analysis doubles the median assumptions, speculating that Huawei will produce (1) eight hundred thousand Ascend 910Cs in 2025, which would be twice its 2025 production targets and four times U.S. government estimates; (2) two million AI chips in 2026, split equally between Ascend 910Cs and Ascend 950s; and (3) four million Ascend 950s in 2027.
- Nvidia: Assumes that Nvidia: (1) produced 4.5 million AI chips in 2025, consistent with Huang’s statement; (2) will increase AI chip production capacity by 50 percent in 2026 to 6.75 million AI chips; (3) will increase capacity by 50 percent again in 2027 to 10.125 million AI chips; and (4) will introduce a new chip that accounts for 20 percent of its production in year one and 80 percent in year two.
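As an illustration of how a Figure 4-style forecast is constructed, the sketch below multiplies the chip counts from the scenarios above by per-chip performance weights. The chip counts are the scenarios stated in this article; the TPP weights are hypothetical placeholders (indexed to the Ascend 910C = 1.0), not the actual specifications in Appendix B, so the printed ratios are illustrative only:

```python
# Chip counts per year follow the article's scenarios; the per-chip TPP
# weights below are hypothetical placeholders, NOT published specifications.
PER_CHIP_TPP = {
    "ascend_910c": 1.0,   # baseline
    "ascend_950": 0.9,    # roadmap indicates a lower TPP than the 910C
    "hopper": 2.0,        # hypothetical weight
    "blackwell": 10.0,    # hypothetical weight
    "rubin": 30.0,        # hypothetical next-generation weight
}

# (chip model, units produced) per year under each scenario.
huawei_aggressive = {
    2025: [("ascend_910c", 800_000)],
    2026: [("ascend_910c", 1_000_000), ("ascend_950", 1_000_000)],
    2027: [("ascend_950", 4_000_000)],
}
nvidia = {
    2025: [("hopper", 900_000), ("blackwell", 3_600_000)],   # 20/80 split of 4.5M
    2026: [("blackwell", 5_400_000), ("rubin", 1_350_000)],  # 6.75M; new chip is 20%
    2027: [("blackwell", 2_025_000), ("rubin", 8_100_000)],  # 10.125M; new chip is 80%
}

def aggregate_tpp(plan, year):
    """Aggregate compute = sum over chip models of per-chip TPP x units produced."""
    return sum(PER_CHIP_TPP[chip] * units for chip, units in plan[year])

for year in (2025, 2026, 2027):
    ratio = aggregate_tpp(huawei_aggressive, year) / aggregate_tpp(nvidia, year)
    print(f"{year}: aggressive Huawei scenario = {ratio:.1%} of Nvidia")
```

Even with these placeholder weights, the structural result matches the article’s: Huawei’s share of aggregate compute is small and shrinks over time, because Nvidia is simultaneously raising both per-chip performance and unit volume.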
Figure 4: Nvidia vs. Huawei—Yearly AI Compute Production
Analysis
- Even under the most aggressive assumptions, Huawei’s AI compute production is a small fraction of Nvidia’s and declines over time. For example, in an aggressive production scenario in which Huawei makes millions of AI chips in 2026, the company still produces only about 5 percent as much aggregate AI computing power as Nvidia. Under median-case assumptions about Huawei’s production capacity, Huawei produces under 3 percent of Nvidia’s AI computing power, a share that will fall to 1 percent by 2027.
- China’s strategy of producing larger numbers of worse chips will not be effective—it simply cannot make enough chips to make up the gap with U.S. firms, which have access to much more advanced chipmaking technology and much greater capacity at TSMC, and are increasing both the quality and quantity of chips they are producing. The current U.S. advantage in AI hardware will therefore persist and expand.
- Huawei will not be able to make many of the multi-rack solutions that it advertises as competitive with Nvidia's rack-scale systems. Huawei has aggressively marketed the CloudMatrix 384, which contains 384 Ascend 910C chips, and the upcoming Atlas 950 SuperPod, which contains 8,192 Ascend 950 chips, as competitors to Nvidia's rack-scale systems that contain 72 leading Nvidia chips. However, Huawei will simply not be able to make many of these multi-rack systems; even under the most aggressive chip production assumptions, Huawei will make under five thousand CloudMatrix systems and about six hundred SuperPods by 2027. Nvidia will make almost three hundred thousand comparable rack-scale systems over the same period.
- It is virtually impossible for Huawei to close the gap with Nvidia. Even if Huawei increased its AI chip production quantity one hundredfold between now and 2027 (which is all but inconceivable), it would still produce less than half as much aggregate AI compute as Nvidia each year.
- China’s AI chip production is almost certainly not keeping up with demand for AI compute. Even under aggressive assumptions, China’s AI compute production is increasing at a linear rate, while demand for AI compute is increasing exponentially as models become more advanced. Given that Nvidia can barely keep up with U.S. demand for AI compute, Huawei will not be able to keep up with China’s demand at a fraction of Nvidia’s production. China will therefore become even more compute constrained over time.
- China’s refusal to allow the import of certain U.S. AI chips, such as the H20, is almost certainly a negotiating tactic. Any U.S. AI chips would add critical AI compute capacity to Chinese firms, given that domestic AI chip production is so constrained. The Chinese government therefore either blocked H20 imports as a negotiating ploy to convince the United States to approve export of more advanced chips, or it does not fully appreciate how limited its domestic AI chip production capabilities are relative to demand. The former is far more likely.
Impact of H200 Exports on China's Domestic AI Computing Power
The Trump administration's decision to export H200 AI chips to China will provide much-needed AI computing power to China’s domestic AI firms, enabling Chinese AI models to close the gap with leading U.S. models. DeepSeek has repeatedly stated that its single biggest constraint is access to AI compute, including in December 2025 in a technical paper accompanying the release of DeepSeek v3.2. Large numbers of H200s would provide a real boost to China’s AI training capabilities, particularly given the amount of computing power needed to train next-generation models is rising at a faster rate than China’s AI compute production.
Figure 5 visualizes the impact that large-scale H200 exports would have on the amount of AI computing power that China is able to acquire. This assumes that Nvidia exports 3 million H200 chips to China in 2026 and 4.5 million in 2027, which would represent approximately 9 percent and 3 percent of all AI computing power produced by Nvidia each year, respectively. For reference, before export controls went into effect, Nvidia derived 20–25 percent of its revenue from China, which almost certainly represented more than 9 percent of all AI computing power produced by Nvidia. It also assumes China’s domestic AI compute production capacity is in line with the high-end production estimates described above, to ensure the analysis does not overstate the impact of H200 sales.
Figure 5: Impact of H200 Exports on China’s AI Computing Power
Analysis
- If Nvidia exports three million H200s to China in 2026, it would give China at least a two-to-three-year boost in the amount of AI computing power it is able to legally acquire domestically, and potentially more. With three million H200 exports, the amount of AI computing power acquired by China in 2026 would likely be greater than what China could otherwise make until 2028 or 2029, at the earliest—and potentially longer, if China’s AI chip production falls below the high-end estimates.
- Exporting large numbers of H200s to China could cause China to build some of the largest AI data centers in the world. Under the H200 export projections described above, if these chips are split equally between Alibaba, Tencent, Baidu, and ByteDance, each company would receive 750,000 H200s in 2026 and an additional 1,125,000 H200s in 2027. If each company created a single data center with all H200s it owned—which is possible given Chinese companies likely would use the new Nvidia chips primarily for training AI models, and AI training is easiest if all chips are located in a single location—that data center would be three times more powerful than the largest data center operating today (xAI's Colossus data center in Memphis, which contains the equivalent of about 275,000 H200s). While data centers larger than Colossus are under construction in the United States, China's leading data centers would remain competitive for the foreseeable future.
- This scenario will allow DeepSeek and others to close the gap with U.S. models much more quickly, particularly if China concentrates this computing power into a small number of locations.
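The data-center comparison above reduces to simple division. The Colossus estimate (~275,000 H200-equivalents) and the four-way split of H200 exports are this article’s stated assumptions:

```python
# Per-company H200 allocation under the article's assumed four-way split
# among Alibaba, Tencent, Baidu, and ByteDance.
exports_2026 = 3_000_000
per_company_2026 = exports_2026 // 4   # 750,000 H200s each in 2026
per_company_2027 = 4_500_000 // 4      # a further 1,125,000 each in 2027

# Compare a single-site 2026 cluster against xAI's Colossus
# (~275,000 H200-equivalents, per the article).
colossus_h200_equiv = 275_000
ratio_2026 = per_company_2026 / colossus_h200_equiv
print(f"{ratio_2026:.1f}x Colossus from the 2026 allocation alone")
```

The roughly 2.7x figure corresponds to the “about three times” comparison for the 2026 allocation; adding the 2027 allocation would more than double it again.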
Impact of H200 Exports on China's Ability to Build AI Infrastructure Globally
Exporting AI chips to China could also allow China to build data centers around the world that compete with AI infrastructure owned and operated by U.S. firms globally. Chinese cloud providers are not currently building large-scale data centers overseas, which has allowed U.S. hyperscalers to dominate the global market for AI infrastructure. There is no evidence of a single data center located outside of China that is owned and operated by a company headquartered in China and includes a large cluster of AI chips.
The financials of U.S. cloud providers versus their Chinese peers make clear that while U.S. companies plan to spend extremely large sums of money to support a large-scale AI infrastructure buildout, their Chinese competitors largely do not. Figure 6 compares the past and anticipated spending of U.S. and Chinese cloud computing providers, assuming U.S. export controls on AI chips remain in effect.
Figure 6: U.S. vs. Chinese Hyperscalers—Capital Expenditures
Analysis
- U.S. cloud computing providers are currently investing seven times more money in AI data center construction than their Chinese competitors. Goldman projects that this multiple will grow to nine times by 2027 as the capital expenditure of U.S. firms rises substantially, while the capital expenditure of China’s hyperscalers (such as Alibaba, Tencent, Baidu, and ByteDance) remains largely stagnant.
- The primary explanation for the projected low CapEx of Chinese hyperscalers relative to their U.S. competitors is simple: Chinese companies do not have access to large quantities of AI chips, which are needed to power advanced data centers. U.S. export controls permit those providers to rent access to U.S. AI chips via the cloud, but they do not permit them to own and operate the chips themselves.
- China's low AI chip production numbers are insufficient to support the construction of large-scale data centers both domestically and overseas. To illustrate: the thirty-five thousand GB300 chips that the United States plans to sell to the United Arab Emirates alone in 2025 are equivalent in AI processing power to half of all Huawei AI chips produced in China this year (using the high-end assumptions about China’s AI chip production capabilities included in this article). As China’s production falls further behind leading U.S. firms, it is even more unlikely that China would allocate any of its scarce domestically produced compute to overseas data centers.
- Loosening controls now would significantly increase the amount of AI computing power owned by Chinese hyperscalers, which would allow China to build data centers around the world that compete with U.S. AI infrastructure globally. China’s Belt and Road Initiative has largely been locked out of the AI revolution due to U.S. export controls. Loosening controls would allow China to offer lower-cost, subsidized data centers—either using newly approved U.S. AI chips or Chinese AI chips that it can now afford to export without constraining its domestic AI buildout. This opportunity could be particularly attractive to developing countries or those looking for an alternative to U.S. cloud providers.
Conclusion
The data is clear: Huawei is not a viable competitor to Nvidia, and claims to the contrary are not supported by the evidence. Huawei’s AI chips are significantly less powerful than their U.S. counterparts, the gap is widening rather than narrowing, and Huawei cannot produce anywhere near enough chips to compensate for this quality deficit. According to Huawei's own public roadmap, its next-generation chip in 2026 will actually be worse than its best chip today. Even if Huawei vastly exceeds its publicly reported production expectations and makes millions of Ascend chips next year, it will still produce only around 4 percent of the aggregate AI computing power that Nvidia produces. China’s strategy of producing larger quantities of inferior chips is not working, and the fundamental constraints imposed by U.S. and allied export controls on semiconductor manufacturing equipment ensure that this will not change in the foreseeable future.
Given those realities, the Trump administration’s decision to loosen export controls on AI chips represents a massive risk. Approving large-scale exports of H200 chips to China would provide Chinese AI firms with substantially more computing power than they can produce domestically, potentially increasing the amount of AI computing power coming online in China next year to levels that China would not be able to reach until 2028 or 2029 if export controls had stayed in effect. This could enable China to build some of the largest AI data centers in the world, cause Chinese AI labs to close the gap with leading U.S. AI models, and empower China to build competitive AI infrastructure globally, undermining the strategic advantages the United States has carefully cultivated through its export control regime.
The United States holds a commanding lead in AI hardware production, and that lead is growing. There is no strategic rationale for giving it away.
Appendix A: Key Characteristics and Control Parameters of AI Chips
To determine whether a chip is useful for developing AI models (a process also known as training) or running AI models (a process also known as inference), there are two critical variables. First, a chip must be able to process large amounts of data extremely quickly. And second, it must be able to move large amounts of data in and out of the chip extremely quickly. Therefore, U.S. export controls ban the export of chips to China that exceed either of two technical parameters:[3]
- Total Processing Performance (TPP): This measures how many calculations a chip can complete per second.
- Memory Bandwidth: This measures how much information can flow on and off the chip per second.
For AI training, chips require high TPP and high memory bandwidth. This is because AI training requires aggregating enormous numbers of AI chips to create as much overall computing power as possible, and using them all simultaneously over an extended period of time to develop the model. For AI inference, memory bandwidth is often more important than TPP. Queries of existing AI models use comparatively less processing power and are conducted much more rapidly than training, but because they still use multiple chips simultaneously, high memory bandwidth to allow for rapid communication and transfer of data is still essential.
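For concreteness, the TPP parameter can be computed directly from a chip’s advertised throughput. Under the BIS definition, TPP = 2 x MacTOPS x bit length of the operation, which simplifies to dense TFLOPS x bit length, since one multiply-accumulate counts as two operations. The sketch below applies this to the Nvidia A100, the reference chip for the controls described in this article:

```python
def tpp(dense_tflops: float, bit_length: int) -> float:
    """BIS total processing performance: 2 x MacTOPS x bit length,
    equivalently dense TFLOPS x bit length (one MAC = 2 operations)."""
    return dense_tflops * bit_length

# Nvidia A100: ~312 dense FP16 tensor TFLOPS at 16-bit precision.
print(tpp(312, 16))  # 4992 -- just above the 4800 TPP threshold in ECCN 3A090.a
```

This is why the A100 sits at the control line: its TPP of 4,992 narrowly exceeds the 4,800 threshold, so any chip matching or exceeding it requires a license for export to China.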
Appendix B: Table with Specifications of Nvidia and Huawei AI Chips
Note: Chips in italics are not yet released, but anticipated specifications are drawn from Nvidia’s and Huawei’s public roadmaps, as well as further reporting on Nvidia’s projected capabilities from SemiAnalysis.
* The Ascend 910A was only fabricated at TSMC, and production halted in 2020 when the Trump administration blocked Huawei from fabricating chips at TSMC. TSMC also fabricated nearly three million dies for Ascend 910B and 910C chips; the number of Ascend 910B and 910C chips that have been both fabricated and packaged in China (as opposed to by TSMC) is unknown.
Appendix C: Areas for Further Analysis
There are several factors relevant to AI computing power production that warrant further analysis. Accounting for these additional factors would likely show an even larger U.S. lead over China.
This analysis is limited to Nvidia and Huawei, which are the leading AI chip firms in each country and represent a majority of the AI compute produced by each country. However, both countries’ AI industries have other players (e.g., AMD and Google in the United States, and Cambricon and Moore Threads in China), which should be considered as part of a comprehensive analysis. Such analysis would likely further emphasize the United States’ advantage, as Huawei likely represents a larger percentage of China’s overall AI compute production than Nvidia does for the United States. For example, Cambricon hopes to sell 300,000 AI chips next year, and its best chip (the Siyuan 590) delivers 80 percent of the performance of the Nvidia A100; if Cambricon produced 300,000 of these chips and Huawei hit the high-end production numbers in this analysis, the total AI computing power of all Cambricon chips produced in 2025 would be 12 percent of that of all Huawei AI chips. In contrast, Google will likely make around three million AI chips in 2025; assuming one third of these were Google's most advanced TPU v7 and two thirds were its previous-generation TPU v6, the total AI computing power of all Google AI chips produced in 2025 would be 34 percent of that of all Nvidia AI chips.
Further detailed analysis regarding the precise quantity of AI chips that China can produce would reduce the uncertainty and refine the conclusions. China is unlikely to significantly exceed the high estimates in this analysis, and given that its chips are decreasing in quality, it may be facing substantial domestic production challenges that would cause it to fall short of even the median-case estimates. Such analysis is therefore more likely to further skew the overall assessment toward the United States.
Additionally, this analysis evaluates each AI chip on its advertised TPP, as defined by the manufacturer, and does not incorporate real-world performance testing. Including performance testing would likely further skew the analysis toward the United States, as Huawei’s Ascend chips are known to be substantially less reliable than Nvidia’s chips and may not meet stated performance parameters. DeepSeek's acknowledgement that the Ascend 910C performs only 60 percent as well as the H100, despite having 80 percent of the TPP, is evidence in support of this.
Finally, by solely relying on TPP, this analysis also does not incorporate performance benefits that result from a chip’s ability to use 8-bit or 4-bit floating-point precision (FP8 or FP4), which significantly improve efficiency, particularly for AI inference. Incorporating this information would further skew the overall production capability analysis toward the United States, as Huawei’s best chips today lack native FP8 or FP4 support, while the H200 chip has FP8 support and the Blackwell chips have FP8 and FP4 support.